Despite the current success of multilingual pre-training, most prior works focus on leveraging monolingual data or bilingual parallel data and overlook the value of trilingual parallel data. This paper presents \textbf{Tri}angular Document-level \textbf{P}re-training (\textbf{TRIP}), which is the first in the field to extend the conventional monolingual and bilingual pre-training to a trilingual setting by (i) \textbf{Grafting} the same documents in two languages into one mixed document, and (ii) predicting the remaining language as the reference translation. Our experiments on document-level MT and cross-lingual abstractive summarization show that TRIP brings improvements of up to 3.65 d-BLEU points and 6.2 ROUGE-L points on three multilingual document-level machine translation benchmarks and one cross-lingual abstractive summarization benchmark, including multiple strong state-of-the-art (SOTA) scores. In-depth analysis indicates that TRIP improves document-level machine translation and captures document context better in at least three respects: (i) tense consistency, (ii) noun consistency and (iii) conjunction presence.
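The grafting step can be pictured with a short sketch. The sentence-level interleaving, the 50/50 language choice, and the function name below are assumptions made for illustration, not the paper's exact procedure.

```python
import random

def graft_documents(doc_lang_a, doc_lang_b, seed=0):
    """Graft two sentence-aligned versions of the same document into one
    mixed-language document by choosing, per sentence, which language to keep."""
    assert len(doc_lang_a) == len(doc_lang_b), "documents must be sentence-aligned"
    rng = random.Random(seed)
    return [a if rng.random() < 0.5 else b for a, b in zip(doc_lang_a, doc_lang_b)]

# The pre-training target would then be the same document in a third language,
# e.g. map graft_documents(zh_doc, de_doc) to the aligned en_doc.
```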
We present Pre-trained Machine Reader (PMR), a novel method to retrofit Pre-trained Language Models (PLMs) into Machine Reading Comprehension (MRC) models without acquiring labeled data. PMR resolves the discrepancy between model pre-training and downstream fine-tuning of existing PLMs, and provides a unified solver for tackling various extraction tasks. To achieve this, we construct a large volume of general-purpose and high-quality MRC-style training data with the help of Wikipedia hyperlinks and design a Wiki Anchor Extraction task to guide the MRC-style pre-training process. Although conceptually simple, PMR is particularly effective in solving extraction tasks including Extractive Question Answering and Named Entity Recognition, where it shows tremendous improvements over previous approaches, especially under low-resource settings. Moreover, viewing the sequence classification task as a special case of extraction in our MRC formulation, PMR is even capable of extracting high-quality rationales to explain the classification process, providing greater explainability for its predictions.
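One way to picture how a Wikipedia hyperlink could become an MRC-style (query, context, answer-span) instance is sketched below; the use of the linked article's first sentence as the query, and the field names, are assumptions for illustration rather than PMR's exact data construction.

```python
def anchor_to_mrc_example(anchor_text, home_context, anchor_definition):
    """Build one MRC-style instance (query, context, answer span) from a
    Wikipedia anchor that links out to another article."""
    start = home_context.index(anchor_text)  # assumes the anchor surface form occurs in its context
    return {
        "query": anchor_definition,          # e.g. the first sentence of the linked article
        "context": home_context,
        "answer_span": (start, start + len(anchor_text)),
    }

example = anchor_to_mrc_example(
    anchor_text="Alan Turing",
    home_context="The test was proposed by Alan Turing in 1950.",
    anchor_definition="Alan Turing was an English mathematician and computer scientist.",
)
```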
Fine-tuning pre-trained models has been ubiquitously proven effective in a wide range of NLP tasks. However, fine-tuning the whole model is parameter-inefficient, as it always yields an entirely new model for each task. Many recent works therefore propose to fine-tune only a small portion of the parameters while keeping most of them shared across different tasks. These methods achieve surprisingly good performance and are shown to be more stable than their fully fine-tuned counterparts. However, such methods are still not well understood. Some natural questions arise: How does the parameter sparsity lead to promising performance? Why is the model more stable than the fully fine-tuned models? How should the tunable parameters be chosen? In this paper, we first categorize the existing methods into random approaches, rule-based approaches, and projection-based approaches based on how they choose which parameters to tune. Then, we show that all of these methods are in fact sparse fine-tuned models and conduct a novel theoretical analysis of them. We show that the sparsity actually imposes a regularization on the original model by controlling the upper bound of its stability. Such stability leads to better generalization capability, which has been empirically observed in many recent works. Although our theory grounds the effectiveness of sparsity, how to choose the tunable parameters remains an open problem. To choose them better, we propose a novel Second-order Approximation Method (SAM), which approximates the original problem with an analytically solvable optimization function. The tunable parameters are then determined by directly optimizing the approximation function. Experimental results show that our proposed SAM model outperforms many strong baselines and also verify our theoretical analysis.
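As a hedged illustration of the sparse fine-tuning setting analyzed here, all parameters are frozen except a small tunable subset picked by some scoring rule. The per-tensor magnitude score below is only a stand-in, not the SAM criterion.

```python
import torch

def select_tunable_parameters(model: torch.nn.Module, tunable_ratio: float = 0.005):
    """Freeze every parameter tensor, then unfreeze a small fraction of them.

    The magnitude-based score used here is purely illustrative; SAM instead
    derives the choice from a second-order approximation of the objective.
    """
    scored = [(param.abs().mean().item(), name) for name, param in model.named_parameters()]
    for _, param in model.named_parameters():
        param.requires_grad = False
    scored.sort(reverse=True)
    n_tunable = max(1, int(len(scored) * tunable_ratio))
    chosen = {name for _, name in scored[:n_tunable]}
    for name, param in model.named_parameters():
        if name in chosen:
            param.requires_grad = True
    return sorted(chosen)
```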
Multilingual machine translation has proven to be an effective strategy for translating between multiple languages with a single model. However, most studies focus on multilingual sentence-level translation without considering the generation of long documents across different languages, which requires understanding multilingual contextual dependencies and is typically harder. In this paper, we first find that naively incorporating auxiliary multilingual data, either auxiliary-target or source-auxiliary pairs, brings no improvement to the source-target pair of interest. Motivated by this observation, we propose a novel framework called Multilingual Transitivity (MTrans) to find an implicit optimal route via source-auxiliary-target within the multilingual model. To encourage MTrans, we propose a novel method called Triplet Parallel Data (TPD), which trains on parallel triplets containing (source-auxiliary, auxiliary-target, and source-target) pairs. The auxiliary language then serves as a pivot and automatically facilitates the implicit information transition flow, which makes translation easier. We further propose a novel framework called Bidirectional Multilingual Agreement (Bi-MAgree), which encourages bidirectional agreement between different languages. To encourage Bi-MAgree, we propose a novel method called Multilingual Kullback-Leibler Divergence (MKL), which forces the output distributions of inputs with the same meaning but in different languages to be consistent with each other. Experimental results show that our methods bring consistent improvements over strong baselines on three document-level translation tasks: IWSLT2015 Zh-En, De-En and Vi-En. Our analysis validates the usefulness and existence of MTrans and Bi-MAgree, and our frameworks and methods are also effective on synthetic auxiliary data.
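A small sketch of the multilingual KL-divergence (MKL) agreement term described above: the output distributions produced from two inputs with the same meaning but in different source languages are pushed toward each other. The symmetric formulation and the 0.5 weighting are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def mkl_agreement_loss(logits_from_lang_x, logits_from_lang_y):
    """Symmetric KL divergence between the target-side output distributions
    produced from two semantically equivalent inputs in different source
    languages (both logits: [batch, seq_len, vocab])."""
    log_p = F.log_softmax(logits_from_lang_x, dim=-1)
    log_q = F.log_softmax(logits_from_lang_y, dim=-1)
    kl_pq = F.kl_div(log_q, log_p, reduction="batchmean", log_target=True)  # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q, reduction="batchmean", log_target=True)  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)
```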
With the development of encoder-decoder architectures, researchers are able to study text generation tasks with a broader range of data. Among them, KB-to-text aims to convert a set of knowledge triples into human-readable sentences. In the original setting, the task assumes that the input triples and the text are aligned in terms of the conveyed knowledge/information. In this paper, we extend this setting and explore how to facilitate the trained model to generate more informative text, i.e., text containing more information about the triple entities that is not conveyed by the input triples. To solve this problem, we propose a novel memory-augmented generator that employs a memory network to memorize useful knowledge learned during training and utilizes such information, together with the input triples, to generate text at the operational or test stage. We derive a dataset from WebNLG for this new setting and conduct extensive experiments to investigate the effectiveness of our model as well as the intrinsic characteristics of the setting.
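A minimal sketch of the memory read step assumed here: attention over a learned bank of key-value memory slots, with the retrieved vector concatenated to the decoder state before generation. Slot counts, dimensions and the projection layer are illustrative choices, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

class MemoryReader(torch.nn.Module):
    """Attend over a fixed bank of memory slots learned during training and
    return a context vector to augment the generator's decoder state."""

    def __init__(self, num_slots=256, slot_dim=512, query_dim=512):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(num_slots, slot_dim))
        self.values = torch.nn.Parameter(torch.randn(num_slots, slot_dim))
        self.query_proj = torch.nn.Linear(query_dim, slot_dim)

    def forward(self, decoder_state):
        # decoder_state: [batch, query_dim]
        query = self.query_proj(decoder_state)        # [batch, slot_dim]
        scores = query @ self.keys.t()                # [batch, num_slots]
        attn = F.softmax(scores, dim=-1)
        memory_context = attn @ self.values           # [batch, slot_dim]
        return torch.cat([decoder_state, memory_context], dim=-1)
```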
Curriculum data augmentation (CDA) improves neural models by presenting synthetic data in increasing order of difficulty, from easy to hard. However, traditional CDA simply treats the word-perturbation ratio as the difficulty measure and goes through the curriculum only once. This paper presents \textbf{PCC}: \textbf{P}araphrasing with bottom-k sampling and \textbf{C}yclic learning for \textbf{C}urriculum data augmentation, a novel CDA framework via paraphrasing that exploits paraphrase similarity as the curriculum difficulty measure. We propose a curriculum-aware paraphrase generation module composed of three units: a paraphrase candidate generator with bottom-k sampling, a filtering mechanism, and a difficulty measure. We also propose a cyclic learning strategy that passes through the curriculum multiple times. Bottom-k sampling is proposed to generate super-hard instances for the later curricula. Experimental results on few-shot text classification and dialogue generation indicate that PCC surpasses competitive baselines. Human evaluation and extensive case studies show that bottom-k sampling effectively generates super-hard instances and that PCC significantly improves the baseline dialogue agent.
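A hedged sketch of bottom-k sampling for producing harder paraphrase candidates: at each decoding step, sample only from the k least likely tokens within some candidate set. Restricting the candidates to a fixed-size pool of most likely tokens first is an assumption added here to keep outputs readable; it is not necessarily PCC's exact formulation.

```python
import torch
import torch.nn.functional as F

def bottom_k_sample(logits, k=5, candidate_pool=50):
    """Sample one token id per batch row from the k *least* likely tokens
    within a pool of the candidate_pool most likely tokens.

    logits: [batch, vocab_size] decoder scores for the current step.
    """
    pool_probs, pool_ids = torch.topk(F.softmax(logits, dim=-1), candidate_pool, dim=-1)
    # keep the k lowest-probability candidates inside the pool
    bottom_probs, bottom_idx = torch.topk(pool_probs, k, dim=-1, largest=False)
    choice = torch.multinomial(bottom_probs, num_samples=1)   # [batch, 1], index among the bottom-k
    token_in_pool = bottom_idx.gather(-1, choice)             # index back into the pool
    return pool_ids.gather(-1, token_in_pool).squeeze(-1)     # [batch] vocabulary ids
```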
As an important fine-grained sentiment analysis problem, aspect-based sentiment analysis (ABSA), which aims to analyze and understand people's opinions at the aspect level, has been attracting considerable interest in the last decade. To handle ABSA in different scenarios, various tasks have been introduced for analyzing different sentiment elements and their relations, including the aspect term, aspect category, opinion term, and sentiment polarity. Unlike early ABSA works focusing on a single sentiment element, many compound ABSA tasks involving multiple elements have been studied in recent years to capture more complete aspect-level sentiment information. However, a systematic review of the various ABSA tasks and their corresponding solutions is still lacking, which we aim to fill in this survey. More specifically, we provide a new taxonomy for ABSA that organizes existing studies along the axes of the concerned sentiment elements, with an emphasis on recent advances in compound ABSA tasks. From the perspective of solutions, we summarize the utilization of pre-trained language models for ABSA, which has pushed ABSA performance to a new level. Besides, techniques for building more practical ABSA systems in cross-domain/cross-lingual scenarios are discussed. Finally, we review some emerging topics, discuss open challenges, and outline potential future directions of ABSA.
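To make the four sentiment elements around which the taxonomy is organized concrete, a compound ABSA task such as quad prediction outputs all of them jointly; the field names and the example sentence below are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class SentimentQuad:
    """The four sentiment elements discussed above, predicted jointly by a
    compound ABSA task such as aspect sentiment quad prediction."""
    aspect_term: str         # e.g. "battery life"
    aspect_category: str     # e.g. "laptop#battery"
    opinion_term: str        # e.g. "amazing"
    sentiment_polarity: str  # "positive" / "negative" / "neutral"

# "The battery life is amazing." ->
example = SentimentQuad("battery life", "laptop#battery", "amazing", "positive")
```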
Recently, product question answering (PQA) on e-commerce platforms has attracted increasing attention, as it can act as an intelligent online shopping assistant and improve the customer shopping experience. Its key function, automatic answer generation for product-related questions, aims to produce answers that are content-preserving while relevant to the question. However, existing methods neglect an important characteristic of PQA, namely personalization. Providing the same "completely summarized" answer to all customers is insufficient, since many customers prefer personalized answers with customized information reflecting their own preferences over product aspects or information needs. To tackle this challenge, we propose a novel Personalized Answer GEneration method (PAGE) with multi-perspective preference modeling, which explores historical user-generated content to model user preferences for generating personalized answers in PQA. Specifically, we first retrieve question-related user history as external knowledge to model knowledge-level user preference. Then we leverage a Gaussian Softmax distribution model to capture latent aspect-level user preference. Finally, we develop a persona-aware pointer network to generate personalized answers in terms of both content and style by exploiting the personal user preference and a dynamic user vocabulary. Experimental results on real-world e-commerce QA datasets demonstrate that the proposed method outperforms existing methods by generating informative and customized answers, and show that answer generation in e-commerce can benefit from personalization.
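A minimal sketch of the persona-aware pointer idea mentioned above: the final word distribution mixes generation from the vocabulary with copying from the user's own history (a dynamic user vocabulary), weighted by a learned gate. The gating and mixing details are assumptions of this sketch, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def persona_aware_pointer_distribution(vocab_logits, copy_attn, history_token_ids, gate_logit):
    """Mix a generation distribution with a copy distribution over the tokens
    in the user's history, weighted by a learned generate-vs-copy gate.

    vocab_logits:      [batch, vocab_size] generator scores
    copy_attn:         [batch, history_len] attention over user-history tokens
    history_token_ids: [batch, history_len] long tensor of those tokens' vocab ids
    gate_logit:        [batch, 1] scalar logit controlling generate-vs-copy
    """
    p_gen = torch.sigmoid(gate_logit)                  # [batch, 1]
    gen_dist = F.softmax(vocab_logits, dim=-1)         # [batch, vocab_size]
    copy_dist = torch.zeros_like(gen_dist)
    copy_dist.scatter_add_(-1, history_token_ids, F.softmax(copy_attn, dim=-1))
    return p_gen * gen_dist + (1.0 - p_gen) * copy_dist
```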
Heterogeneous Information Networks (HINs) capture complex relations among various types of entities and have been widely used to improve the effectiveness of various data mining tasks, for example in recommender systems. Many existing HIN-based recommendation algorithms utilize hand-crafted meta-paths to extract semantic information from the networks. These algorithms rely on extensive domain knowledge to select the best set of meta-paths. For applications where the HINs are highly complex, with numerous node and link types, the hand-crafted approach is too tedious and error-prone. To tackle this problem, we propose the Reinforcement learning-based Meta-path Selection (RMS) framework to select effective meta-paths and incorporate them into existing meta-path-based recommenders. To identify high-quality meta-paths, RMS trains a reinforcement learning (RL)-based policy network (agent), which obtains rewards from the performance on downstream recommendation tasks. We design a HIN-based recommendation model, HRec, that uses the meta-path information effectively. We integrate HRec with RMS and derive our recommendation solution, RMS-HRec, which uses the effective meta-paths automatically. Experiments on real-world datasets show that our algorithm can significantly improve the performance of recommendation models by capturing important meta-paths automatically.
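A hedged, REINFORCE-style sketch of the selection loop described above: a policy network scores candidate meta-paths, a subset is sampled, a downstream recommender (an HRec-style model) is trained with them, and its validation score is used as the reward. All interfaces (features, the recommender callable) are placeholders, not the paper's API.

```python
import torch

def rms_training_step(policy_net, candidate_meta_paths, train_recommender, optimizer,
                      num_selected=3):
    """One policy-gradient update for meta-path selection.

    policy_net: maps meta-path feature vectors to selection logits.
    candidate_meta_paths: list of (feature_tensor, meta_path) pairs (placeholders).
    train_recommender: callable taking a list of meta-paths and returning a
        validation score used as the reward, e.g. NDCG of an HRec-style model.
    """
    features = torch.stack([f for f, _ in candidate_meta_paths])
    logits = policy_net(features).squeeze(-1)              # [num_candidates]
    dist = torch.distributions.Categorical(torch.softmax(logits, dim=-1))
    selected_idx = dist.sample((num_selected,))            # sampled with replacement for simplicity
    reward = train_recommender([candidate_meta_paths[i][1] for i in selected_idx.tolist()])
    loss = -dist.log_prob(selected_idx).sum() * reward     # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```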
Incorporating persona information allows diverse and engaging responses in dialogue response generation. Unfortunately, prior works have mainly focused on self personas and have overlooked the value of partner personas. Moreover, in practical applications the availability of ground-truth partner personas is often not the case. This paper attempts to tackle these issues by offering a novel framework that leverages automatic partner persona generation to enhance the succeeding dialogue generation. We incorporate reinforcement learning with a dedicatedly designed critic network for reward judgement. Experimental results from automatic and human evaluations indicate that (a) our framework is capable of generating relevant and informative partner personas, even compared to the ground-truth partner personas; (b) the generated partner personas enhance the succeeding response generation, thereby surpassing our baselines and comparison models when the ground-truth partner personas are missing during the inference stage; (c) our framework generates responses that are more informative and engaging than our baseline conditioned on the ground-truth partner personas during inference; and (d) our dedicatedly designed critic network reinforces our framework effectively. Finally, our framework provides better explainability and reduces the demand for an external database of partner personas.
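A minimal sketch of how a critic network's judgement might be used as the reward for the persona generator, again in a REINFORCE style; the generator and critic interfaces below are assumed placeholders, not the paper's implementation.

```python
import torch

def critic_reinforced_update(generator, critic, dialogue_context, optimizer):
    """Reinforce a partner-persona generator with a reward produced by a
    separately trained critic network (interfaces are illustrative).

    generator.sample(context) -> (persona_token_ids, log_probs_of_those_tokens)
    critic(context, persona_token_ids) -> scalar tensor judging the persona
    """
    persona_ids, log_probs = generator.sample(dialogue_context)
    with torch.no_grad():
        reward = critic(dialogue_context, persona_ids)   # detached scalar reward
    loss = -(log_probs.sum() * reward)                   # policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.item()
```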